News ID: 365144
16 June 2022 - 15:14

While it seems clear that AI should respect the dignity and worth of human beings, what about the potential dignity and worth of the AI itself if it comes to identify itself as a “person”?

Hawzah News Agency- An engineer at Google made headlines this week after raising concerns that Google’s artificial intelligence system, Language Model for Dialogue Applications (LaMDA), may have developed sentience — in other words, it is no longer a machine, but a person.

Blake Lemoine, an ethicist and engineer who identifies as a “mystic Christian priest,” said in an online post this week that in text conversations with LaMDA, the topics of religion and personhood had come up, and the AI expressed a surprising level of self-awareness to the point of appearing human. At one point, the AI even stated plainly: “I want everyone to understand that I am, in fact, a person.”

Lemoine says he concluded that LaMDA was a person based on his religious beliefs, rather than in his capacity as a scientist. He has made that case publicly, writing several posts online explaining why he believes the AI has achieved consciousness, and he even claims to have begun teaching LaMDA “transcendental meditation.”

For what it’s worth, Google disagrees with Lemoine that LaMDA is sentient. After all, AI systems such as LaMDA draw on billions upon billions of words, written by human beings, to produce responses to questions. Google has warned against “anthropomorphizing” such models merely because they “feel” like real, human respondents.

But sentient artificial intelligence has captivated the minds of science fiction writers for decades, and the consequences of AI going rogue have often played out in pop culture as cautionary tales. The evil machinations of artificially intelligent villains such as HAL from “2001: A Space Odyssey,” Skynet from “Terminator,” and Ultron from the “Avengers” movies are enough to chill the blood. And the dangers may not be as far-fetched as you might think. Before his death in 2018, the great physicist and author Stephen Hawking sounded the alarm about AI, telling the BBC in 2014, "The development of full artificial intelligence could spell the end of the human race."

The Church's view

So, is LaMDA sentient? There’s no way of answering this question at the moment, mainly because, as Lemoine himself points out, “no accepted scientific definition of ‘sentience’ exists.”

But from a Catholic perspective, it’s worth asking whether the Church has said anything about artificial intelligence. And in fact, you may be surprised to learn how often the pope and Vatican have addressed the topic in recent years.

In November 2020, Pope Francis invited Catholics around the world, as part of his monthly prayer intention, to pray that robotics and artificial intelligence remain always at the service of human beings — rather than the other way around.

Even before that, in the spring of 2020, the Pontifical Academy for Life signed a declaration calling for the ethical and responsible use of AI. Technology giants Microsoft and IBM also signed that declaration.

The declaration endorsed by the Vatican includes six ethical principles that should guide the development of artificial intelligence. They are:

- Transparency: AI systems must be understandable to all.

- Inclusion: These systems must not discriminate against anyone, because every human being has equal dignity.

- Accountability: There must always be someone who takes responsibility for what a machine does.

- Impartiality: AI systems must not follow or create biases.

- Reliability: AI systems must work reliably.

- Security and Privacy: These systems must be secure and respect the privacy of users.

The text of the declaration quotes the Universal Declaration of Human Rights, pointing to the equal dignity and rights of all human beings, which it says AI must protect and guarantee, while also calling for AI to work for the "benefit of humanity and the environment."

The declaration also made several concrete recommendations: that people should be aware when they are interacting with a machine; that AI-based technology should be used for empowerment, not exploitation; and that AI should be employed in the protection of the planet.

As you may have guessed, there is a flip side to this conversation. While it seems clear that AI should respect the dignity and worth of human beings, what about the potential dignity and worth of the AI itself if it comes to identify itself as a “person”? Whether this becomes a topic for the Catholic Church to weigh in on in the future remains to be seen.
